Uniform Markov renewal theory and ruin probabilities in Markov random walks

Authors

Abstract


Similar Resources

Uniform Markov Renewal Theory and Ruin Probabilities in Markov Random Walks

Let {X_n, n ≥ 0} be a Markov chain on a general state space X with transition probability P and stationary probability π. Suppose an additive component S_n takes values in the real line R and is adjoined to the chain such that {(X_n, S_n), n ≥ 0} is a Markov random walk. In this paper, we prove a uniform Markov renewal theorem with an estimate on the rate of convergence. This result is applied to bou...
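As a brief restatement of the setup (the display below uses assumed notation and is not quoted from the paper), the defining property of the Markov random walk {(X_n, S_n)} is that, conditionally on the driving chain, each increment S_{n+1} - S_n depends only on the current state X_n:

    \[
      P\{ X_{n+1} \in A,\; S_{n+1} - S_n \in B \mid X_0,\ldots,X_n;\ S_0,\ldots,S_n \}
      = P\{ X_{n+1} \in A,\; S_{n+1} - S_n \in B \mid X_n \},
    \]

so the pair (X_n, S_n) is a Markov chain on X × R even though S_n alone need not be Markov.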


Markov Chains and Random Walks

Let G = (V, E) be a connected, undirected graph with n vertices and m edges. For a vertex v ∈ V, Γ(v) denotes the set of neighbors of v in G. A random walk on G is the following process, which occurs in a sequence of discrete steps: starting at a vertex v0, at the first step we choose a random edge incident on v0 and walk along it to a vertex v1, and so on. "Randomly chosen neighbor" will ...
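A minimal sketch of this process, assuming an adjacency-list representation (illustrative code, not taken from the cited text):

    import random

    def random_walk(neighbors, start, steps):
        """Simulate a simple random walk on an undirected graph.

        neighbors: dict mapping each vertex v to the list Gamma(v) of its neighbors.
        start: the starting vertex v0.
        steps: number of discrete steps to take.
        Returns the visited vertices [v0, v1, ..., v_steps].
        """
        path = [start]
        v = start
        for _ in range(steps):
            v = random.choice(neighbors[v])  # walk along a random incident edge
            path.append(v)
        return path

    # Example: a 4-cycle on vertices 0..3
    G = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(random_walk(G, start=0, steps=10))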


Random walks and Markov chains

This lecture discusses Markov chains, which capture and formalize the idea of a memoryless random walk on a finite number of states, and which have wide applicability as a statistical model of many phenomena. Markov chains are postulated to have a set of possible states, and to transition randomly from one state to the next, where the probability of transitioning to a particular next state ...
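For concreteness, a small illustrative sketch of such a memoryless process (the two-state chain and its transition probabilities below are assumptions, not taken from the lecture); each move is drawn using only the current state:

    import random

    def sample_chain(P, states, start, steps):
        """Sample a trajectory of a finite Markov chain.

        P: dict mapping each state to a dict {next_state: probability}; rows sum to 1.
        The next state is drawn from the current state's row only -- memorylessness.
        """
        traj = [start]
        s = start
        for _ in range(steps):
            s = random.choices(states, weights=[P[s][t] for t in states])[0]
            traj.append(s)
        return traj

    # Hypothetical two-state weather chain
    states = ["Sunny", "Rainy"]
    P = {"Sunny": {"Sunny": 0.8, "Rainy": 0.2},
         "Rainy": {"Sunny": 0.5, "Rainy": 0.5}}
    print(sample_chain(P, states, start="Sunny", steps=10))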


Lecture 7: Markov Chains and Random Walks

A transition probability P_ij corresponds to the probability that the state at time step t+1 will be j, given that the state at time t is i. Therefore, each row in the matrix M is a distribution: for all i, j ∈ S, P_ij ≥ 0 and ∑_j P_ij = 1. Let the initial distribution be given by the row vector x, with x_i ≥ 0 and ∑_i x_i = 1. After one step, the new distribution is xM. It is easy to see that xM is again ...
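A short numerical sketch of this update (the 3-state matrix below is a made-up example, not from the lecture notes): each row of M sums to 1, and the distribution after one step is the row vector xM, which is again a distribution.

    import numpy as np

    # Hypothetical 3-state transition matrix M: entry M[i, j] = P_ij, each row sums to 1.
    M = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.2, 0.2, 0.6]])

    # Initial distribution x: nonnegative entries summing to 1.
    x = np.array([1.0, 0.0, 0.0])

    x1 = x @ M                              # distribution after one step
    x5 = x @ np.linalg.matrix_power(M, 5)   # distribution after five steps

    print(x1, x1.sum())  # entries stay nonnegative and sum to 1
    print(x5, x5.sum())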


Computation of Ruin Probabilities for General Discrete-time Markov Models

We study the ruin problem over a risk process described by a discrete-time Markov model. In contrast to previous studies that focused on the asymptotic behaviour of ruin probabilities for large values of the initial capital, we provide a new technique to compute the quantity of interest for any initial value, and with any given precision. Rather than focusing on a particular model for risk proc...
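The paper's own technique is not reproduced here; as a generic illustration of the quantity being computed, the sketch below evaluates finite-horizon ruin probabilities for a toy discrete-time surplus process U_{t+1} = U_t + c - claim_t by backward recursion. The premium c and the claim distribution are assumptions, and truncating the horizon is only a crude stand-in for the paper's any-precision computation.

    import functools

    # Toy discrete-time risk model (assumed, not the paper's): integer surplus,
    # premium income c = 2 per period, i.i.d. integer claims with the law below.
    PREMIUM = 2
    CLAIMS = {0: 0.5, 1: 0.2, 3: 0.2, 6: 0.1}  # claim size -> probability

    @functools.lru_cache(maxsize=None)
    def ruin_prob(u, horizon):
        """P(surplus falls below 0 within `horizon` periods | current capital u)."""
        if u < 0:
            return 1.0   # already ruined
        if horizon == 0:
            return 0.0   # survived the whole horizon
        return sum(p * ruin_prob(u + PREMIUM - claim, horizon - 1)
                   for claim, p in CLAIMS.items())

    for u in range(6):
        print(u, round(ruin_prob(u, horizon=50), 6))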



Journal

Journal title: The Annals of Applied Probability

Year: 2004

ISSN: 1050-5164

DOI: 10.1214/105051604000000260